

Search results: all records where Creators/Authors contains "Anam, Amrita"


  1. With the increase in natural disasters worldwide, there is a crucial need for innovative, inexpensive solutions to assist emergency response systems. Information collected through conventional sources (e.g., incident reports, 911 calls, physical volunteers) is proving to be insufficient [1]. Responsible organizations are now turning toward research that explores digital human connectivity and freely available sources of information. The U.S. Geological Survey and the Federal Emergency Management Agency (FEMA) introduced Critical Lifelines (CLLs), which identify the most significant areas requiring immediate attention during natural disasters. These organizations applied crowdsourcing by connecting digital volunteer networks to collect data on the critical lifelines from sources including social media [3], [4], [5]. In recent years, during some of the deadliest hurricanes (e.g., Harvey, Irma, Maria, Michael, and Florence), people took to social media platforms as never before in search of help for rescue, shelter, and relief. Their posts reflect crisis updates and real-time observations of the devastation they witnessed. In this paper, we propose a methodology to build and analyze time-frequency features of words on social media to assist volunteer networks in identifying the context before, during, and after a natural disaster and in distinguishing contexts connected to the critical lifelines. We employ the Continuous Wavelet Transform to create word features and propose two ways to reduce their dimensionality; the reduced features are used to form word clusters that identify themes of conversation associated with the stages of a disaster and with these lifelines. We compare the two wavelet-feature methodologies and their word clusters both qualitatively and quantitatively, showing that wavelet features can identify and separate context without using semantic information as input.
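To make the wavelet-feature idea above concrete, here is a minimal sketch, not the paper's implementation: it assumes synthetic per-word mention-count time series, uses PyWavelets' `pywt.cwt` with a Morlet wavelet to build time-frequency word features, PCA as one possible dimensionality reduction (the paper proposes two), and k-means to form word clusters.

```python
# Hypothetical sketch: CWT-based word features and word clustering.
# The word time series below are synthetic stand-ins; the paper's exact
# feature construction and its two reduction schemes are not reproduced.
import numpy as np
import pywt                                   # PyWavelets
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cwt_features(series, scales=np.arange(1, 33)):
    """Flatten the magnitude of a word's Continuous Wavelet Transform
    into a single feature vector."""
    coeffs, _ = pywt.cwt(series, scales, 'morl')   # shape: (n_scales, n_times)
    return np.abs(coeffs).ravel()

# Assumed data layout: hourly mention counts per word over a disaster window.
rng = np.random.default_rng(0)
word_series = {f"word_{i}": rng.poisson(5, size=96).astype(float) for i in range(50)}

words = list(word_series)
X = np.vstack([cwt_features(word_series[w]) for w in words])

# One possible dimensionality reduction step.
X_low = PCA(n_components=10).fit_transform(X)

# Cluster words into conversation themes (disaster stages / lifelines).
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X_low)
for cluster in range(8):
    print(cluster, [w for w, c in zip(words, labels) if c == cluster])
```

Because the features come from mention-count dynamics alone, no semantic input (word meanings or embeddings) is required, which is the point the abstract emphasizes.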
  2. For over a decade, social media has proved to be a functional and convenient data source in the Internet of Things. Social platforms such as Facebook, Twitter, Instagram, and Reddit each have their own styles and purposes. Among them, Twitter has become the most popular platform in the research community because it attracts people to write brief posts about current and unexpected events (e.g., natural disasters). The immense popularity of such sites has opened a new horizon in "social sensing" for managing disaster response. Sensing through social media platforms can be used to track and analyze natural disasters and to evaluate the overall response (e.g., resource allocation, relief, and cost and damage estimation). In this paper, we propose a two-step methodology: i) wavelet analysis and ii) predictive modeling to track the progression of a disaster aftermath and predict its timeline. We demonstrate that wavelet features can preserve text semantics and predict the total duration of localized, small-scale disasters. Experimental results and observations on two real data traces (the Cummins Falls State Park and Arizona swimming-hole flash floods) show that wavelet features can predict the disaster timeline with an error below 20% using less than 50% of the data, compared to ground truth.
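A rough sketch of the two-step idea (wavelet analysis, then predictive modeling) follows, under stated assumptions: synthetic tweet-volume traces stand in for real disaster data, CWT scale energies from the first half of each trace serve as features, and a random-forest regressor predicts total duration. The paper's actual features, model, and evaluation protocol are not reproduced here.

```python
# Hypothetical sketch: wavelet features from a partial tweet-volume trace
# used to predict total disaster-aftermath duration. All data and the
# regressor choice below are assumptions for illustration only.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestRegressor

def partial_trace_features(volume, fraction=0.5, scales=np.arange(1, 17)):
    """CWT scale-energy features from the first `fraction` of a volume trace."""
    cut = int(len(volume) * fraction)
    coeffs, _ = pywt.cwt(volume[:cut], scales, 'morl')
    return np.abs(coeffs).mean(axis=1)          # mean energy per scale

# Synthetic stand-in for labeled past events: (hourly tweet volume, duration in hours).
rng = np.random.default_rng(1)
durations = rng.uniform(24, 96, size=30)
events = [(rng.poisson(20, size=int(d)).astype(float), d) for d in durations]

X = np.vstack([partial_trace_features(v) for v, _ in events])
y = np.array([d for _, d in events])

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Predict the timeline of a new event from only half of its trace.
new_trace = rng.poisson(20, size=72).astype(float)
pred = model.predict(partial_trace_features(new_trace).reshape(1, -1))[0]
print(f"predicted total duration: {pred:.1f} hours")
```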
  3. Computer vision and image processing are emerging research paradigms. The increasing popularity of social media and micro-blogging services, together with the ubiquitous availability of high-resolution smartphone cameras and pervasive connectivity, is expanding our digital footprints and cyber activities. Such online human footprints related to an event of interest, if mined appropriately, can provide meaningful information for analyzing the current course and the pre- and post-impact, supporting the organizational planning of various real-time smart city applications. In this paper, we investigate the narrative (text) and visual (image) components of Twitter feeds to improve query results by exploiting the deep context of each data modality. We employ Latent Semantic Analysis (LSA)-based techniques to analyze the texts and the Discrete Cosine Transform (DCT) to analyze the images, which helps establish cross-correlations between the textual and image dimensions of a query. While each data dimension improves the results of a specific query on its own, the contributions from the dual modalities can potentially provide insights greater than what can be obtained from the individual modalities. We validate our proposed approach using real Twitter feeds from a recent devastating flash flood in Ellicott City near the University of Maryland campus. Our results show that the images and texts can be classified with 67% and 94% accuracy, respectively.
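The two modalities named in the last abstract can be illustrated with a small, self-contained sketch: TF-IDF followed by truncated SVD as an LSA-style text representation, and low-frequency 2-D DCT coefficients as image features, each feeding a simple classifier. The tweets, images, labels, and classifier choice below are placeholders, not the paper's dataset, models, or reported accuracies.

```python
# Hypothetical sketch: LSA-style text features and 2-D DCT image features,
# each feeding its own classifier. Data below are synthetic placeholders;
# the printed scores are not the 67% / 94% reported in the paper.
import numpy as np
from scipy.fft import dctn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression

# --- Text branch: TF-IDF followed by truncated SVD (LSA) ---
tweets = ["road flooded near main street", "water rising fast downtown",
          "beautiful sunny afternoon", "coffee with friends today"] * 10
text_labels = np.array([1, 1, 0, 0] * 10)           # 1 = flood-related

tfidf = TfidfVectorizer().fit_transform(tweets)
lsa = TruncatedSVD(n_components=5, random_state=0).fit_transform(tfidf)
text_clf = LogisticRegression().fit(lsa, text_labels)

# --- Image branch: low-frequency 2-D DCT coefficients as features ---
def dct_features(img, k=8):
    """Keep the top-left k x k block of the 2-D DCT (low frequencies)."""
    return dctn(img, norm='ortho')[:k, :k].ravel()

rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))                    # stand-in grayscale images
image_labels = rng.integers(0, 2, size=40)

X_img = np.vstack([dct_features(im) for im in images])
img_clf = LogisticRegression(max_iter=1000).fit(X_img, image_labels)

print("text accuracy :", text_clf.score(lsa, text_labels))
print("image accuracy:", img_clf.score(X_img, image_labels))
```

Keeping only the low-frequency DCT block is one common way to get a compact, illumination-tolerant image descriptor; how the paper actually couples the two modalities into cross-correlations is not shown here.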